Worker Perception of Quality Assurance Mechanisms in Crowdsourcing and Human Computation Markets

Authors

  • Thimo Schulze
  • Dennis Nordheimer
  • Martin Schader
Abstract

Many human computation systems recruit workers through crowdsourcing marketplaces. Because of the open nature of these marketplaces, requesters need appropriate quality assurance mechanisms to guarantee high-quality results. Previous research has focused mostly on the statistical aspects of quality assurance. Instead, we analyze worker perception of five quality assurance mechanisms (Qualification Test, Qualification Restriction, Gold Standard, Majority Vote, Validating Review) according to subjective (fairness, offense, benefit) and objective (necessity, accuracy, cost) criteria. Drawing on theory from related areas such as labor psychology, we develop a conceptual model and test it with a survey on Mechanical Turk. Our results show substantial differences in perception, especially for Majority Vote, which workers rate low. On the basis of these results, we discuss implications for theory and advise requesters on crowdsourcing markets to integrate the worker perspective when selecting an appropriate quality assurance mechanism.
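Of the five mechanisms studied, Majority Vote is the one rated lowest by workers; as a mechanism it simply aggregates redundant answers for the same task by plurality. A minimal sketch of that aggregation step (function and variable names are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant worker answers for one task by plurality.

    answers: labels submitted by different workers for the same item.
    Returns the winning label and the share of workers who agreed.
    """
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(answers)

# Hypothetical example: three workers label the same item.
label, agreement = majority_vote(["cat", "cat", "dog"])
# label == "cat", agreement == 2/3
```

The agreement ratio is what makes the mechanism feel opaque to workers: a dissenting answer can be discarded (and possibly unpaid) even when the worker acted in good faith.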


Similar articles

Behavior-Based Quality Assurance in Crowdsourcing Markets

Quality assurance in crowdsourcing markets has emerged as an acute problem in recent years. We propose a quality control method inspired by Statistical Process Control (SPC), commonly used to control output quality in production processes and characterized by relying on time-series data. Behavioral traces of users may play a key role in evaluating the performance of work done on crowdso...
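The SPC idea mentioned above can be sketched with Shewhart-style control limits over a worker's recent quality scores: values falling outside the limits flag the worker for review. This is a minimal sketch of the general SPC technique, assuming per-worker accuracy time series; it is not the specific method of the cited paper:

```python
import statistics

def control_limits(samples, k=3.0):
    """Compute lower/upper control limits (mean ± k·stdev) from a
    historical time series of a worker's quality scores."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return mean - k * sd, mean + k * sd

def out_of_control(value, limits):
    """True if a new observation falls outside the control limits."""
    lo, hi = limits
    return not (lo <= value <= hi)

# Hypothetical history of a worker's accuracy on recent batches.
limits = control_limits([0.90, 0.92, 0.88, 0.91, 0.89])
```

A sudden drop (say, to 0.5) would then trigger an alarm, while normal variation around the worker's baseline would not.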


Optimization techniques for human computation-enabled data processing systems

Crowdsourced labor markets make it possible to recruit large numbers of people to complete small tasks that are difficult to automate on computers. These marketplaces are increasingly widely used, with projections of over $1 billion being transferred between crowd employers and crowd workers by the end of 2012. While crowdsourcing enables forms of computation that artificial intelligence has no...


Programmatic Gold: Targeted and Scalable Quality Assurance in Crowdsourcing

Crowdsourcing is an effective tool for scalable data annotation in both research and enterprise contexts. Due to crowdsourcing’s open participation model, quality assurance is critical to the success of any project. Present methods rely on EM-style post-processing or manual annotation of large gold standard sets. In this paper we present an automated quality assurance process that is inexpensiv...
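Gold-standard quality assurance, as referenced above and among the five mechanisms in the main paper, seeds the task stream with items whose answers are known and scores each worker against them. A minimal sketch of that scoring step (names and threshold are illustrative assumptions, not from either paper):

```python
def gold_accuracy(worker_answers, gold):
    """Score a worker against seeded tasks with known (gold) answers.

    worker_answers, gold: dicts mapping task id -> label; only tasks
    present in both are scored. Returns None if no gold task was seen.
    """
    scored = [t for t in gold if t in worker_answers]
    if not scored:
        return None
    correct = sum(worker_answers[t] == gold[t] for t in scored)
    return correct / len(scored)

# Hypothetical: a worker got one of two seeded gold tasks right.
acc = gold_accuracy({"t1": "a", "t2": "b"}, {"t1": "a", "t2": "c"})
# acc == 0.5
```

A requester would typically block or re-train workers whose gold accuracy falls below some chosen threshold; the cited paper's contribution is generating such gold items automatically rather than annotating them by hand.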


Inserting Micro-Breaks into Crowdsourcing Workflows

Participants in human computation workflows may become fatigued or get bored over long, interminable working hours. This leads to a slump of motivation and morale, which in the long run causes reductions in both productivity and work quality. In this paper we propose an initial investigation into possible ways to alleviate worker fatigue and boredom by employing micro-breaks that provide timely...


A Framework for Quality Assurance in Crowdsourcing

The emergence of online paid micro-crowdsourcing platforms, such as Amazon Mechanical Turk (AMT), allows on-demand and at scale distribution of tasks to human workers around the world. In such settings, online workers come and complete small tasks posted by a company, working for as long or as little as they wish. Such temporary employer-employee relationships give rise to adverse selection, mo...



Journal:

Volume   Issue

Pages  -

Publication date: 2013